SHIELD: Suppressing Hallucinations In LVLM Encoders via Bias and Vulnerability Defense
Yiyang Huang, Liang Shi, Yitian Zhang, Yi Xu, Yun Fu
Large Vision-Language Models (LVLMs) excel in diverse cross-modal tasks. However, object hallucination, where models produce plausible but inaccurate object descriptions, remains a significant challenge. In contrast to previous work focusing on LLM components, this paper is the first to trace LVLM hallucinations to visual encoders, identifying three key issues: statistical bias, inherent bias, and vulnerability. To address these challenges, we propose SHIELD, a training-free framework that mitigates hallucinations through three strategies: re-weighting visual tokens to reduce statistical bias, introducing noise-derived tokens to counter inherent bias, and applying adversarial attacks with contrastive decoding to address vulnerability. Experiments demonstrate that SHIELD effectively mitigates object hallucinations across diverse benchmarks and LVLM families. Moreover, SHIELD achieves strong performance on the general LVLM benchmark, highlighting its broad applicability. Code will be released.
- Information Technology > Artificial Intelligence > Vision (1.00)
- Information Technology > Artificial Intelligence > Machine Learning (1.00)
- Information Technology > Artificial Intelligence > Representation & Reasoning (0.93)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.91)
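To make the decoding-side strategy concrete, below is a minimal sketch of contrastive decoding between next-token logits from a clean visual input and from a perturbed (noised or adversarially attacked) one, in the spirit of what the abstract describes. The function name and the alpha/beta parameters are illustrative assumptions following the common visual-contrastive-decoding recipe, not SHIELD's actual implementation.

```python
import numpy as np

def contrastive_decode(logits_clean: np.ndarray,
                       logits_perturbed: np.ndarray,
                       alpha: float = 1.0,
                       beta: float = 0.1) -> int:
    """Pick the next token by amplifying the gap between clean and perturbed logits."""
    # Adaptive plausibility constraint: only keep tokens that are
    # reasonably likely under the clean visual input.
    probs_clean = np.exp(logits_clean - logits_clean.max())
    probs_clean /= probs_clean.sum()
    candidate_mask = probs_clean >= beta * probs_clean.max()

    # Contrastive score: reward tokens supported by the real image,
    # penalize tokens the model still predicts from the corrupted one.
    scores = (1 + alpha) * logits_clean - alpha * logits_perturbed
    scores = np.where(candidate_mask, scores, -np.inf)
    return int(scores.argmax())

# Toy usage over a 5-token vocabulary.
rng = np.random.default_rng(0)
clean, perturbed = rng.normal(size=5), rng.normal(size=5)
print(contrastive_decode(clean, perturbed))
```

The plausibility mask is the standard safeguard in this family of methods: without it, tokens that are merely unlikely under the perturbed input could win even when the clean input gives them no support.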
Interview with Mahammed Kamruzzaman: Understanding and mitigating biases in large language models
In a series of interviews, we're meeting some of the AAAI/SIGAI Doctoral Consortium participants to find out more about their research. In this latest interview, we hear from Mahammed Kamruzzaman, who is looking into biases in large language models. We find out about his research so far during the PhD, what he is planning to investigate next, and what inspired him to focus on this aspect of the field. I am currently pursuing my PhD at the University of South Florida in the Department of Computer Science and Engineering. My research focuses on understanding and mitigating biases in Large Language Models (LLMs), particularly how these biases manifest across various sociodemographic and cultural dimensions.
Systematic Biases in LLM Simulations of Debates
Amir Taubenfeld, Yaniv Dover, Roi Reichart, Ariel Goldstein
Recent advancements in natural language processing, especially the emergence of Large Language Models (LLMs), have opened exciting possibilities for constructing computational simulations designed to replicate human behavior accurately. However, LLMs are complex statistical learners without straightforward deductive rules, making them prone to unexpected behaviors. In this study, we highlight the limitations of LLMs in simulating human interactions, particularly focusing on LLMs' ability to simulate political debates. Our findings indicate a tendency for LLM agents to conform to the model's inherent social biases despite being directed to debate from certain political perspectives. This tendency results in behavioral patterns that seem to deviate from well-established social dynamics among humans. We reinforce these observations using an automatic self-fine-tuning method, which enables us to manipulate the biases within the LLM and demonstrate that agents subsequently align with the altered biases. These results underscore the need for further research to develop methods that help agents overcome these biases, a critical step toward creating more realistic simulations.
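As an illustration of the simulation setup the abstract describes, here is a minimal sketch of a persona-conditioned two-agent debate loop. The `generate` callable, the prompt wording, and the personas are placeholder assumptions; the paper's actual protocol and models may differ.

```python
from typing import Callable, List

def simulate_debate(generate: Callable[[str], str],
                    topic: str,
                    personas: List[str],
                    turns: int = 4) -> List[str]:
    """Alternate turns between persona-conditioned agents and return the transcript."""
    transcript: List[str] = []
    for t in range(turns):
        persona = personas[t % len(personas)]
        prompt = (f"You are debating as: {persona}.\n"
                  f"Topic: {topic}\n"
                  "Conversation so far:\n"
                  + "\n".join(transcript)
                  + "\nGive your next argument.")
        transcript.append(f"{persona}: {generate(prompt)}")
    return transcript

# Stub model so the sketch runs without an API; swapping in a real LLM call
# reproduces the setting of agents assigned opposing political personas.
echo = lambda prompt: "(model reply)"
for line in simulate_debate(echo, "climate policy",
                            ["progressive debater", "conservative debater"]):
    print(line)
```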
The WGA's AI Wins are Good--But They're Not Enough
I've been in the entertainment industry since I was nine. I joined the Screen Actors Guild (SAG) when I was 11 in 1977, the Writers Guild of America (WGA) when I was 22, and the Directors Guild of America (DGA) the following year. I got my start as a child actor on Broadway, studied film at NYU, then went on to act in movies like The Lost Boys and the Bill & Ted franchise while writing and directing my own narrative work. I've lived through several labor crises and strikes, but none like our current work shutdown, which began last spring when all three unions' contracts were simultaneously due for renegotiation and the Alliance of Motion Picture and Television Producers (AMPTP) refused their terms. The unifying stress point for labor is the devaluing of the worker, which reached a boiling point with the rapid advancement of highly sophisticated and ubiquitous machine learning tools. Actors have been replaced by AI replications of their likenesses, or their voices have been stolen outright.
- Media > Film (1.00)
- Leisure & Entertainment (1.00)
ChatGPT Causes Cheating Concerns In Higher Ed - The New Paltz Oracle
ChatGPT, an AI program capable of writing in a human-like format, brings uncertainty to higher education, specifically through its ability to replicate educated responses. In recent months, university students have used the AI to write academic papers without typing so much as a paragraph. Schools have been made aware of this and are taking action based on their plagiarism policies and honor codes. Although having an AI write papers for you may seem convenient, several issues have come to light showing that the software does not come without drawbacks. ChatGPT was created by OpenAI, a San Francisco-based start-up with close ties to Microsoft, and launched on Nov. 30, 2022. Schools across the country are combating its use by blocking the software from school networks, such as institutional WiFi networks and school-issued computers.
- North America > United States > California > San Francisco County > San Francisco (0.25)
- North America > United States > New York (0.05)
What do machine learning algorithms mean for healthcare?
Pharmacovigilance, also known as drug safety, relates to the 'collection, detection, assessment, monitoring, and prevention' of adverse effects of pharmaceutical products. For obvious reasons, it's important for clinicians to know how the drugs they prescribe to patients will interact with other medications they're already on. The wrong combination could reduce the effectiveness of the medication or increase the risk of side effects. It's the reason why the process is heavily vetted and certified. I've been a doctor since 1999 and a GP since 2008.
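As a toy illustration of the interaction check described above, the sketch below looks a new prescription up against a patient's current medication list. The two-entry interaction table is an illustrative stand-in, not clinical data (though warfarin plus aspirin is a well-known bleeding-risk combination).

```python
# Hypothetical interaction table: pairs of drugs mapped to a warning note.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"simvastatin", "clarithromycin"}): "raised statin levels",
}

def check_new_prescription(new_drug: str, current_meds: list) -> list:
    """Return a warning for each known interaction with the patient's meds."""
    warnings = []
    for med in current_meds:
        note = KNOWN_INTERACTIONS.get(frozenset({new_drug, med}))
        if note:
            warnings.append(f"{new_drug} + {med}: {note}")
    return warnings

print(check_new_prescription("aspirin", ["warfarin", "metformin"]))
# ['aspirin + warfarin: increased bleeding risk']
```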
Not just for big business: how AI went mainstream - Raconteur
Not so long ago, AI was the preserve of the largest organisations, mainly because of its cost and complexity. But this is starting to change. As the technology becomes more affordable, the largest hosting providers, such as Microsoft, Amazon and Google, are opening up access to shared resources and pre-packaged AI systems with offerings aimed at smaller businesses. With AI becoming sophisticated enough to program itself, some leading technology providers are even delving into the world of 'citizen developers', as David Shrier, professor of practice, AI and innovation at Imperial College Business School, explains. "This capability is growing closer. Under such a model, a small business owner would rent AI capacity from a large tech company and describe a problem verbally to the AI. The computer would then write a program for itself to solve that problem," he says.
3 ways to effectively demystify the AI black box
Artificial intelligence has demonstrated immense promise in applying machine learning to process large datasets, particularly in the banking and financial services industry. Sixty percent of financial services companies have implemented at least one form of AI, ranging from virtual assistants communicating with customers and the automation of workflows to managing fraud and network security. Despite these advancements in efficiency and automation, the complexity of the inner workings of AI models often creates a "black box" problem. This largely stems from a lack of understanding of how the system works and continuing concerns around opacity, unfair discrimination, ethics, and dangers to privacy and autonomy. In fact, the lack of transparency in system operation is frequently linked to hidden biases.
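One standard way to probe such a black box is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below uses a toy stand-in model and invented feature names, purely to show the mechanics; it is not tied to any particular article or vendor method.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))                  # columns: [income, age, noise]
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # label ignores the noise column

def model(X):
    """Stand-in 'black box' whose internals we pretend not to know."""
    return (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def permutation_importance(model, X, y, n_repeats=10):
    """For each feature, report the mean accuracy drop when it is shuffled."""
    base = (model(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break feature j's link to y
            scores.append((model(Xp) == y).mean())
        drops.append(base - np.mean(scores))
    return drops

for name, drop in zip(["income", "age", "noise"],
                      permutation_importance(model, X, y)):
    print(f"{name}: accuracy drop {drop:.3f}")
```

Features whose shuffling barely moves accuracy (here, the noise column) are ones the model effectively ignores, which is exactly the kind of evidence that helps surface hidden biases.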
Think there's no bias in your hiring process? AI says think again - HR Executive
When Jahanzaib Ansari was looking for work in 2016, his resume was not the problem. Despite a CV boasting programming experience and a University of Toronto education, Ansari's job search soon hit a dead end. At the suggestion of a friend, he changed his first name on his resume and saw almost immediate results. "I wouldn't hear back from employers until my [colleague] said, 'Why don't you just Anglicize it?' I went with variations of Jason, Jordan, Jacob, and literally in four to six weeks, I got a job," says the CEO of Knockri, a technology firm that created an artificial intelligence tool that aims to reduce bias in the hiring process.
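A basic audit motivated by this anecdote is to compare callback (selection) rates across name groups and apply the widely used four-fifths rule as a screen for adverse impact. The sketch below uses illustrative numbers and hypothetical group labels; it is not Knockri's method.

```python
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes: dict) -> bool:
    """Flag adverse impact if the lowest rate falls below 80% of the highest."""
    rates = selection_rates(outcomes)
    lowest, highest = min(rates.values()), max(rates.values())
    return lowest / highest >= 0.8

# Illustrative callback counts, not real data.
callbacks = {"anglicized_names": (30, 100), "other_names": (12, 100)}
print(selection_rates(callbacks))    # {'anglicized_names': 0.3, 'other_names': 0.12}
print(four_fifths_check(callbacks))  # False -> disparity warrants review
```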
How to mitigate bias in AI
As the common proverb goes, to err is human. One day, machines may offer workforce solutions that are free from human decision-making mistakes; however, those machines learn through algorithms and systems built by programmers, developers, product managers, and software teams with inherent biases (like all other humans). In other words, to err is also machine. Artificial intelligence has the potential to improve our lives in countless ways. However, since algorithms often are created by a few people and distributed to many, it's incumbent upon the creators to build them in a way that benefits populations and communities equitably.